## Feature Detection

### Canny Edge Detection

Edge detection is an image-processing technique used to identify the boundaries (edges) of objects or regions within an image. Edges are among the most important features of an image: they reveal its underlying structure, which is why computer vision pipelines use edge detection so extensively.

Canny Edge Detection is a popular edge detection algorithm developed by John F. Canny. It is a multi-stage algorithm, and we will go through each stage:

  • Since edge detection is susceptible to noise, the first step is to remove noise from the image with a 5x5 Gaussian filter. We have already seen this in previous chapters.
  • The smoothed image is then filtered with a Sobel kernel in both the horizontal and vertical directions to obtain the first derivative in the horizontal direction (Gx) and in the vertical direction (Gy). From these two images, we can find the edge gradient and direction for each pixel as follows:

    Edge_Gradient (G) = √(Gx² + Gy²)
    Angle (θ) = tan⁻¹(Gy / Gx)
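To make the gradient computation concrete, here is a minimal pure-NumPy sketch. The helper name `sobel_gradients` is ours for illustration; OpenCV's own Sobel filtering is far more optimized:

```python
import numpy as np

def sobel_gradients(img):
    """Cross-correlate with 3x3 Sobel kernels to get the first derivatives
    Gx and Gy, then the per-pixel edge gradient magnitude and direction."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal change
    ky = kx.T                                                         # vertical change
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    magnitude = np.hypot(gx, gy)                 # sqrt(Gx^2 + Gy^2)
    direction = np.degrees(np.arctan2(gy, gx))   # gradient angle in degrees
    return magnitude, direction

# A vertical step edge: the gradient points horizontally (angle of 0 degrees)
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag, ang = sobel_gradients(img)
```

Note how, for this vertical edge, the gradient direction comes out horizontal, i.e. perpendicular to the edge, as described in the next bullet.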

  • The gradient direction is always perpendicular to edges. It is rounded to one of four angles representing the vertical, horizontal, and two diagonal directions.
  • After getting the gradient magnitude and direction, a full scan of the image is done to remove any unwanted pixels which may not constitute an edge. For this, every pixel is checked to see whether it is a local maximum in its neighborhood along the direction of the gradient. Check the image below:

(figure: non-maximum suppression — points A, B, and C along the gradient direction)

  • Point A is on the edge (in the vertical direction). The gradient direction is normal to the edge. Points B and C are in the gradient direction, so point A is checked against points B and C to see whether it forms a local maximum. If so, it is considered for the next stage; otherwise, it is suppressed (set to zero). In short, the result is a binary image with "thin edges".
  • This stage decides which edges are really edges and which are not. For this, we need two threshold values, minVal and maxVal. Any edge with an intensity gradient above maxVal is sure to be an edge, and any below minVal is sure to be a non-edge and is discarded. Edges lying between these two thresholds are classified as edges or non-edges based on their connectivity: if they are connected to "sure-edge" pixels, they are considered part of an edge; otherwise, they too are discarded. See the image below:

(figure: hysteresis thresholding — edges A, B, and C relative to minVal and maxVal)

  • Edge A is above maxVal, so it is considered a "sure edge". Although edge C is below maxVal, it is connected to edge A, so it is also considered a valid edge and we get the full curve. Edge B, although above minVal and in the same region as edge C, is not connected to any "sure edge", so it is discarded. It is therefore very important to select minVal and maxVal appropriately to get the correct result. This stage also removes small pixel noise, on the assumption that edges are long lines.
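The hysteresis step can be sketched in plain NumPy. This is an illustrative helper of our own (not OpenCV's implementation); `np.roll` wraps around the image border, which is acceptable for this small demo:

```python
import numpy as np

def hysteresis(mag, min_val, max_val):
    """Keep pixels above max_val ("sure edges"), drop pixels below min_val,
    and keep in-between pixels only if 8-connected to a sure edge."""
    strong = mag > max_val
    weak = (mag >= min_val) & (mag <= max_val)
    edges = strong.copy()
    while True:
        # Grow the current edges into their 8-connected neighbours
        grown = np.zeros_like(edges)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, di, axis=0), dj, axis=1)
        new_edges = edges | (weak & grown)
        if np.array_equal(new_edges, edges):
            return edges
        edges = new_edges

# A: sure edge (150 > maxVal); C: weak (80) but connected to A -> kept;
# B: weak (70) and isolated -> discarded
mag = np.array([
    [0,   0,  0, 0,  0],
    [0, 150, 80, 0,  0],
    [0,   0,  0, 0,  0],
    [0,   0,  0, 0, 70],
])
edges = hysteresis(mag, min_val=50, max_val=100)
```

Here `edges` keeps A and C but not B, matching the figure's description.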

What we finally get are the strong edges in the image.

In OpenCV, Canny edge detection is applied with the cv2.Canny() function, which takes the input image and the two hysteresis thresholds (minVal and maxVal).

For these examples, we are going to select data in Google Earth Engine and download it to Google Drive.

Let's install the necessary packages:

In [ ]:
!pip install ipygee
!pip install rasterio
Successfully installed affine-2.4.0 anyascii-0.3.2 dask-expr-1.1.9 deprecated-1.2.14 ee-extra-0.0.15 geetools-1.4.0 ipygee-0.0.18 jedi-0.19.1 lz4-4.3.3 pygal-3.0.5 xee-0.0.15 yamlable-1.1.1
Successfully installed rasterio-1.3.11 snuggs-1.4.7

Now we authenticate the Google Earth Engine API for Python with the access key we receive when logging in with the email registered in GEE:

In [ ]:
import ee
ee.Authenticate()
ee.Initialize(project='my-project-1527255156007')

We'll do the same with Drive. It is essential to use the same Gmail account for GEE and for Drive/Colab authentication:

In [ ]:
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive

Let's import the packages:

In [ ]:
import folium
from folium import plugins
from IPython.display import Image
import rasterio
import numpy as np
from matplotlib import pyplot as plt
import cv2

Let's delimit a study area and use its bounding coordinates in our example:

In [ ]:
AOI = ee.Geometry.Polygon(
        [[[-116.93589360092516, 43.70269397034583],
          [-116.93589360092516, 43.597364844788316],
          [-116.73470646713609, 43.597364844788316],
          [-116.73470646713609, 43.70269397034583]]])

The next step is to select a start date and an end date for the Sentinel 2 satellite imagery search:

In [ ]:
startDateviz = ee.Date.fromYMD(2020,7,1);
endDateviz = ee.Date.fromYMD(2020,7,29);
collectionviz = ee.ImageCollection("COPERNICUS/S2_HARMONIZED").filterDate(startDateviz,endDateviz).filterBounds(AOI).filterMetadata('CLOUDY_PIXEL_PERCENTAGE', 'less_than', 10);

Then we select the median mosaic of the existing images and set the parameters for visualization:

In [ ]:
S2 = collectionviz.median().clip(AOI).divide(10000)
vis_params = {'min': 0, 'max': 0.4, 'bands': ['B4', 'B3','B2']}

Here we define some basemaps to use in Folium:

In [ ]:
basemaps = {
    'Google Satellite': folium.TileLayer(
        tiles = 'https://mt1.google.com/vt/lyrs=s&x={x}&y={y}&z={z}',
        attr = 'Google',
        name = 'Google Satellite',
        overlay = True,
        control = True
    ),
    'Google Terrain': folium.TileLayer(
        tiles = 'https://mt1.google.com/vt/lyrs=p&x={x}&y={y}&z={z}',
        attr = 'Google',
        name = 'Google Terrain',
        overlay = True,
        control = True
    )}

The add_ee_layer function will make it easy to add vector and image layers to Folium:

In [ ]:
def add_ee_layer(self, ee_object, vis_params, name):

    try:
        # display ee.Image()
        if isinstance(ee_object, ee.image.Image):
            map_id_dict = ee.Image(ee_object).getMapId(vis_params)
            folium.raster_layers.TileLayer(
            tiles = map_id_dict['tile_fetcher'].url_format,
            attr = 'Google Earth Engine',
            name = name,
            overlay = True,
            control = True
            ).add_to(self)
        # display ee.ImageCollection()
        elif isinstance(ee_object, ee.imagecollection.ImageCollection):
            ee_object_new = ee_object.mosaic()
            map_id_dict = ee.Image(ee_object_new).getMapId(vis_params)
            folium.raster_layers.TileLayer(
            tiles = map_id_dict['tile_fetcher'].url_format,
            attr = 'Google Earth Engine',
            name = name,
            overlay = True,
            control = True
            ).add_to(self)
        # display ee.Geometry()
        elif isinstance(ee_object, ee.geometry.Geometry):
            folium.GeoJson(
            data = ee_object.getInfo(),
            name = name,
            overlay = True,
            control = True
        ).add_to(self)
        # display ee.FeatureCollection()
        elif isinstance(ee_object, ee.featurecollection.FeatureCollection):
            ee_object_new = ee.Image().paint(ee_object, 0, 2)
            map_id_dict = ee.Image(ee_object_new).getMapId(vis_params)
            folium.raster_layers.TileLayer(
            tiles = map_id_dict['tile_fetcher'].url_format,
            attr = 'Google Earth Engine',
            name = name,
            overlay = True,
            control = True
        ).add_to(self)

    except Exception:
        print("Could not display {}".format(name))

# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer

Now we can plot our Sentinel 2 image of the selected area using Folium.

In [ ]:
my_map = folium.Map(location=[43.648686183744765, -116.83238179062242], zoom_start=12)

# Add custom basemaps
basemaps['Google Terrain'].add_to(my_map)

# Add the elevation model to the map object.
my_map.add_ee_layer(S2, vis_params, 'Sentinel 2')
my_map.add_child(folium.LayerControl())

# Display the map.
display(my_map)

Now we can download this image to Drive so we can open it as a numpy array. First, let's select the bands we will use, as well as the resolution and coordinate system:

In [ ]:
image = S2.select(['B2','B3','B4','B5','B8']).reproject('EPSG:4326', None, 20)

Finally we can download our test image:

In [ ]:
task = ee.batch.Export.image.toDrive(image=image,
                                         crs='EPSG:4326',
                                         scale=20,
                                         fileFormat='GeoTIFF',
                                         description='Crops_USA' ,
                                         maxPixels=1e13,
                                         region= AOI)
In [ ]:
task.start()

After checking in Drive that the image has been stored, let's import the raster file and convert it into a numpy array:

In [ ]:
path = '/content/drive/MyDrive/Datasets_CV/Crops_USA.tif'
In [ ]:
with rasterio.open(path) as src:
    im = src.read()
In [ ]:
im = im.transpose([1,2,0])

Our image data range is between 0 and 1. Let's multiply by 2 to enhance the colors, then multiply by 255 and convert from float to uint8. The resulting image is in the standard format of OpenCV and skimage, which makes it easier to apply their operations.

In [ ]:
img = im*2*255
In [ ]:
img = img.astype('uint8')
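One caveat worth noting (a minimal numpy aside with made-up sample values, not part of the notebook's pipeline): astype('uint8') wraps values above 255 instead of saturating, so if the doubling pushes any pixel past 255 it is safer to clip first:

```python
import numpy as np

# hypothetical reflectance values in [0, 1]; brighten by 2x, scale to [0, 255]
vals = np.array([[0.10, 0.45],
                 [0.60, 0.05]])

# np.clip saturates out-of-range values; a bare astype('uint8') would wrap
# 0.60 * 510 = 306.0 around to 50 instead of capping it at 255
vals8 = np.clip(vals * 2 * 255, 0, 255).astype('uint8')
print(vals8)
```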

Let's select the RGB bands so we can plot the image using matplotlib:

In [ ]:
red = img[:,:,2]
green = img[:,:,1]
blue = img[:,:,0]
In [ ]:
rgb = np.dstack((red, green, blue))
plt.figure(figsize=[12,12])
plt.imshow(rgb)
plt.axis('off')
Out[ ]:
(-0.5, 1120.5, 587.5, -0.5)

However, for this analysis we will use the NDVI, computed as (NIR − Red) / (NIR + Red):

In [ ]:
nir = im[:,:,4]
red = im[:,:,2]
In [ ]:
np.seterr(divide='ignore', invalid='ignore')
Out[ ]:
{'divide': 'warn', 'over': 'warn', 'under': 'ignore', 'invalid': 'warn'}
In [ ]:
ndvi = (nir - red) / (nir + red)
In [ ]:
plt.imshow(ndvi, cmap='RdYlGn')
plt.colorbar()
Out[ ]:
<matplotlib.colorbar.Colorbar at 0x78f1e86fd180>

First let's apply a Gaussian filter to reduce noise that could harm our analysis:

In [ ]:
ndvi = cv2.GaussianBlur(ndvi,(3,3), 0);

After the filter, let's rescale the NDVI to a range between 0 and 1, then multiply it by 255 and convert it to uint8:

In [ ]:
ndvi_rescaled = cv2.normalize(ndvi, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)
In [ ]:
ndvi_rescaled = ndvi_rescaled * 255
In [ ]:
ndvi_rescaled = ndvi_rescaled.astype('uint8')
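For reference, the min-max stretch that cv2.normalize performs here (NORM_MINMAX with alpha=0, beta=1) is just a linear rescale; a numpy equivalent with made-up sample values:

```python
import numpy as np

def rescale01(a):
    """Linearly stretch an array so its minimum maps to 0 and its maximum to 1."""
    return (a - a.min()) / (a.max() - a.min())

ndvi_sample = np.array([-0.2, 0.1, 0.6])   # hypothetical NDVI values
print(rescale01(ndvi_sample))              # approximately [0, 0.375, 1]
```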

Now we apply the Canny detector to extract the edges of the image:

In [ ]:
edges = cv2.Canny(ndvi_rescaled,100,200)

So we can compare the result with the original NDVI image:

In [ ]:
plt.figure(figsize=[30,30])
plt.subplot(121),plt.imshow(ndvi_rescaled,cmap = 'RdYlGn')
plt.title('Original Image')
plt.subplot(122),plt.imshow(edges,cmap = 'gray')
plt.title('Edge Image')
Out[ ]:
Text(0.5, 1.0, 'Edge Image')

Hough Line Transform¶

The Hough Transform is a popular technique to detect any shape, if you can represent that shape in a mathematical form. It can detect the shape even if it is broken or distorted a little bit. We will see how it works for a line.

A line can be represented as y = mx + c or, in parametric form, as ρ = x·cosθ + y·sinθ, where ρ is the perpendicular distance from the origin to the line and θ is the angle formed by this perpendicular line and the horizontal axis, measured counter-clockwise (the direction varies with how you represent the coordinate system; this representation is used in OpenCV). Check the image below:

image.png

So if the line is passing below the origin, it will have a positive ρ and an angle less than 180. If it is going above the origin, instead of taking an angle greater than 180, the angle is taken less than 180 and ρ is taken negative. Any vertical line will have an angle of 0 degrees, and horizontal lines will have 90 degrees.

Now let's see how the Hough Transform works for lines. Any line can be represented by these two terms, (ρ,θ). So first it creates a 2D array or accumulator (to hold the values of the two parameters), initially set to 0. Let rows denote the ρ and columns denote the θ. The size of the array depends on the accuracy you need. Suppose you want the accuracy of angles to be 1 degree; then you need 180 columns. For ρ, the maximum distance possible is the diagonal length of the image, so taking one-pixel accuracy, the number of rows can be the diagonal length of the image.

Consider a 100x100 image with a horizontal line at the middle. Take the first point of the line. You know its (x,y) values. Now in the line equation, put the values θ = 0, 1, 2, ..., 180 and check the ρ you get. For every (ρ,θ) pair, increment the value by one in the corresponding (ρ,θ) cell of the accumulator. So now the accumulator cell (50,90) = 1, along with some other cells.

Now take the second point on the line. Do the same as above. Increment the values in the cells corresponding to the (ρ,θ) you got. This time the cell (50,90) = 2. What you are actually doing is voting for the (ρ,θ) values. Continue this process for every point on the line. At each point, the cell (50,90) will be voted up, while other cells may or may not be. This way, at the end, the cell (50,90) will have the maximum votes. So if you search the accumulator for the maximum votes, you get the value (50,90), which says there is a line in this image at a distance 50 from the origin and at an angle of 90 degrees.

This is how the Hough transform works for lines. It is simple, and maybe you can implement it using NumPy on your own. Below is an image which shows an accumulator. Bright spots at some locations denote the parameters of possible lines in the image.
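The voting procedure just described can indeed be sketched in a few lines of NumPy (a toy accumulator for illustration, not OpenCV's implementation):

```python
import numpy as np

def hough_lines_accumulator(binary, theta_res_deg=1.0):
    """Vote every non-zero pixel of a binary image into a (rho, theta) accumulator."""
    rows, cols = binary.shape
    diag = int(np.ceil(np.hypot(rows, cols)))          # max possible |rho|
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_res_deg))
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=np.int64)
    for y, x in zip(*np.nonzero(binary)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(len(thetas))] += 1  # shift rho to a non-negative index
    return acc, thetas, diag

# 100x100 image with a horizontal line in the middle, as in the text
img_toy = np.zeros((100, 100), dtype=np.uint8)
img_toy[50, :] = 1
acc, thetas, diag = hough_lines_accumulator(img_toy)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
rho, theta_deg = rho_idx - diag, np.rad2deg(thetas[theta_idx])
print(rho, round(theta_deg, 1))  # -> 50 90.0
```

All 100 points of the line vote into the same (50, 90°) cell, which therefore wins with 100 votes.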

image.png

Everything explained above is encapsulated in the OpenCV function cv.HoughLines(). It simply returns an array of (ρ,θ) values, with ρ measured in pixels and θ in radians. The first parameter, the input image, should be a binary image, so apply thresholding or Canny edge detection before applying the Hough transform. The second and third parameters are the ρ and θ accuracies respectively. The fourth argument is the threshold: the minimum number of votes a cell needs to be considered a line. Remember, the number of votes depends on the number of points on the line, so the threshold effectively sets the minimum length of line that can be detected.

After the theory, let's apply the function to our image that contains the edges:

In [ ]:
red = img[:,:,2]
green = img[:,:,1]
blue = img[:,:,0]
In [ ]:
rgb = np.dstack((red, green, blue))
lines = cv2.HoughLines(edges, 1, np.pi/180.0, 350, np.array([]), 0, 0)
for i in range(lines.shape[0]):
    rho, theta = lines[i][0]
    a, b = np.cos(theta), np.sin(theta)  # unit vector along the line normal
    x0, y0 = a*rho, b*rho
    pt1 = ( int(x0+1000*(-b)), int(y0+1000*(a)) )
    pt2 = ( int(x0-1000*(-b)), int(y0-1000*(a)) )
    cv2.line(rgb, pt1, pt2, (255, 0, 0), 2, cv2.LINE_AA)

Let's see the resulting lines applied to the RGB image:

In [ ]:
plt.figure(figsize=[12,12])
plt.imshow(rgb)
plt.axis('off')
Out[ ]:
(-0.5, 1120.5, 587.5, -0.5)

Probabilistic Hough Line Transform¶

In the Hough Transform, you can see that even for a line with just two parameters, it takes a lot of computation. The Probabilistic Hough Transform is an optimization of the Hough Transform we saw. It doesn't take all the points into consideration; instead, it takes only a random subset of points, which is sufficient for line detection. We just have to decrease the threshold. See the image below, which compares the Hough Transform and the Probabilistic Hough Transform in Hough space.

image.png

OpenCV implementation is based on Robust Detection of Lines Using the Progressive Probabilistic Hough Transform by Matas, J. and Galambos, C. and Kittler, J.V. The function used is cv.HoughLinesP(). It has two new arguments.

  • minLineLength - Minimum length of line. Line segments shorter than this are rejected.
  • maxLineGap - Maximum allowed gap between line segments to treat them as a single line.

The best thing is that it directly returns the two endpoints of each line. In the previous case, you got only the parameters of lines, and you had to find all the points yourself. Here, everything is direct and simple.

Let's apply the Probabilistic Hough Line Transform to our example image:

In [ ]:
rgb = np.dstack((red, green, blue))
minLineLength = 1000
maxLineGap = 10
# pass these by keyword: placed positionally they would fill the unused
# `lines` output argument instead of minLineLength and maxLineGap
lines = cv2.HoughLinesP(edges, 1, np.pi/360, 50, minLineLength=minLineLength, maxLineGap=maxLineGap)
for x1,y1,x2,y2 in lines[:,0]:
    cv2.line(rgb,(x1,y1),(x2,y2),(255,0,0),2)

Again, let's plot the resulting lines applied to the RGB image:

In [ ]:
plt.figure(figsize=[12,12])
plt.imshow(rgb)
plt.axis('off')
Out[ ]:
(-0.5, 1120.5, 587.5, -0.5)

Straight line Hough transform using skimage¶

Usually, lines are parameterised as y = mx + c, with a gradient m and y-intercept c. However, this would mean that m goes to infinity for vertical lines. Instead, we therefore construct a segment perpendicular to the line, leading to the origin. The line is represented by the length of that segment, r, and the angle it makes with the x-axis, θ.

The Hough transform constructs a histogram array representing the parameter space (i.e., an M × N matrix, for M different values of the radius r and N different values of the angle θ). For each parameter combination, r and θ, we then find the number of non-zero pixels in the input image that would fall close to the corresponding line, and increment the array at position (r, θ) appropriately.

We can think of each non-zero pixel "voting" for potential line candidates. The local maxima in the resulting histogram indicate the parameters of the most probable lines.

Let's import the functions we will use:

In [ ]:
from skimage.transform import hough_line, hough_line_peaks
from skimage.feature import canny
from skimage.draw import line

First we create an array with the angles in radians that we are going to compute, so we can apply the hough_line function to our edges image. To the result we apply the hough_line_peaks function. This function identifies the most prominent lines, separated by a certain angle and distance, in a Hough transform. Non-maximum suppression with different sizes is applied separately in the first (distances) and second (angles) dimensions of the Hough space to identify peaks.

In [ ]:
tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 360)

h, theta, d = hough_line(edges, theta=tested_angles)
hpeaks = hough_line_peaks(h, theta, d, threshold=0.7 * h.max())

Now we can plot the result of the lines compared to the original NDVI image:

In [ ]:
fig, axes = plt.subplots(1, 2, figsize=(20, 10))
ax = axes.ravel()
ax[0].imshow(ndvi_rescaled,cmap = 'RdYlGn')
ax[0].set_title('Input image')
ax[0].set_axis_off()


ax[1].imshow(rgb)
row1, col1, _ = rgb.shape
for _, angle, dist in zip(*hpeaks):
    y0 = (dist - 0 * np.cos(angle)) / np.sin(angle)
    y1 = (dist - col1 * np.cos(angle)) / np.sin(angle)
    ax[1].plot((0, col1), (y0, y1), '-r')
ax[1].axis((0, col1, row1, 0))
ax[1].set_title('Detected lines')
ax[1].set_axis_off()

Harris Corner Detector¶

The Harris Corner Detector is a mathematical operator that finds features in an image. It is simple and fast to compute, and it is popular because it is largely invariant to rotation and illumination variation (though not to scale). The Shi-Tomasi corner detector, also available in OpenCV, is an improvement on it.

A corner is a point whose local neighborhood contains two dominant and different edge directions. In other words, a corner can be interpreted as the junction of two edges, where an edge is a sudden change in image brightness. Corners are important features in an image, generally termed interest points, which are invariant to translation, rotation and illumination. Although corners make up only a small percentage of the image, they carry much of the information needed to restore image content, and they can be used to minimize the amount of data processed for motion tracking, image stitching, building 2D mosaics, stereo vision, image representation and other related computer vision areas.

image.png

In order to capture the corners from the image, researchers have proposed many different corner detectors, including the Kanade-Lucas-Tomasi (KLT) operator and the Harris operator, which are among the most simple, efficient and reliable for corner detection. These two popular methodologies are both closely associated with, and based on, the local structure matrix. Compared to the Kanade-Lucas-Tomasi corner detector, the Harris corner detector provides good repeatability under changing illumination and rotation, and is therefore more often used in stereo matching and image database retrieval. Although there still exist drawbacks and limitations, the Harris corner detector remains an important and fundamental technique for many computer vision applications.

OpenCV has the function cv.cornerHarris() for this purpose. Its arguments are:

  • img - Input image. It should be grayscale and float32 type.
  • blockSize - It is the size of neighbourhood considered for corner detection
  • ksize - Aperture parameter of the Sobel derivative used.
  • k - Harris detector free parameter in the equation.
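The equation referenced under k is the Harris response R = det(M) − k·(trace M)², where M is the structure matrix built from windowed gradient products. A minimal NumPy sketch of that response (illustrative only — it uses np.gradient and a plain 3×3 box window, so its values differ from cv.cornerHarris):

```python
import numpy as np

def harris_response(gray, k=0.04):
    """Harris response R = det(M) - k * trace(M)^2, with the structure
    matrix M built from gradient products summed over a 3x3 window."""
    gray = gray.astype(np.float64)
    Iy, Ix = np.gradient(gray)                    # image derivatives
    def box3(a):                                  # sum over a 3x3 neighbourhood
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

# bright square on a dark background: R > 0 near corners, R < 0 along
# edges, and R == 0 in flat regions
square = np.zeros((20, 20))
square[5:15, 5:15] = 1.0
R = harris_response(square)
print(R[10, 10], R[5, 10] < 0, R.max() > 0)  # -> 0.0 True True
```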

Let's apply this function to our edges image:

In [ ]:
corners = cv2.cornerHarris(edges, 2, 5, 0.001)

Let's use dilation to increase the size of the dots found:

In [ ]:
dilate_corners = cv2.dilate(corners,None)

And plot the resulting grayscale image:

In [ ]:
plt.figure(figsize=[12,12])
plt.imshow(dilate_corners, cmap='gray')
plt.axis('off')
Out[ ]:
(-0.5, 1120.5, 587.5, -0.5)

Let's iterate through all the corners and draw them on the image (if they pass the threshold):

In [ ]:
thresh = 0.2*dilate_corners.max()
rgb = np.dstack((red, green, blue))
for j in range(0, dilate_corners.shape[0]):
    for i in range(0, dilate_corners.shape[1]):
        if(dilate_corners[j,i] > thresh):
            cv2.circle(rgb, (i, j), 1, (0,255,0), 1)

The resulting image with the corners:

In [ ]:
plt.figure(figsize=[30,30])
plt.subplot(121)
plt.imshow(edges,cmap = 'gray')
plt.title('Edge Image')
plt.subplot(122)
plt.imshow(rgb)
plt.title('Corner Detected Image')
Out[ ]:
Text(0.5, 1.0, 'Corner Detected Image')

Hough Circles Transform¶

The Hough Circle Transform works in a roughly analogous way to the Hough Line Transform. In the line detection case, a line was defined by two parameters (r,θ). In the circle case, we need three parameters to define a circle:

C:(xcenter,ycenter,r)

where (xcenter,ycenter) define the center position (green point) and r is the radius, which allows us to completely define a circle, as it can be seen below:

image.png

For the sake of efficiency, OpenCV implements a detection method slightly trickier than the standard Hough Transform: the Hough gradient method, which is made up of two main stages. The first stage involves edge detection and finding the possible circle centers, and the second stage finds the best radius for each candidate center.
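The first stage can be caricatured in NumPy: instead of voting along a whole circle per radius, each edge pixel votes only along its local gradient direction (a toy sketch, not OpenCV's implementation; the synthetic disk test image is an assumption of this example):

```python
import numpy as np

def vote_centres_along_gradient(gray, edges, rmin, rmax):
    """Toy sketch of the Hough gradient method's first stage: each edge
    pixel votes for candidate centres along its gradient direction only,
    instead of along a full circle for every radius."""
    gy, gx = np.gradient(gray.astype(np.float64))
    acc = np.zeros(gray.shape, dtype=np.int64)
    for y, x in zip(*np.nonzero(edges)):
        n = np.hypot(gx[y, x], gy[y, x])
        if n == 0:
            continue
        uy, ux = gy[y, x] / n, gx[y, x] / n
        for r in range(rmin, rmax + 1):
            for s in (1, -1):                     # centre can lie on either side
                cy, cx = int(round(y + s * r * uy)), int(round(x + s * r * ux))
                if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
                    acc[cy, cx] += 1
    return acc

# filled disk of radius 10 centred at (30, 30): its rim pixels' gradients
# all point towards the centre, which therefore collects the most votes
yy, xx = np.mgrid[0:60, 0:60]
disk_img = ((yy - 30) ** 2 + (xx - 30) ** 2 <= 100).astype(np.float64)
rim = (np.hypot(*np.gradient(disk_img)) > 0) & (disk_img > 0)
acc = vote_centres_along_gradient(disk_img, rim, 5, 15)
peak = np.unravel_index(acc.argmax(), acc.shape)
print(peak)  # close to the true centre (30, 30)
```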

The parameters of cv.HoughCircles are:

  • image - 8-bit, single-channel, grayscale input image.
  • circles - Output vector of found circles (cv.CV_32FC3 type). Each vector is encoded as a 3-element floating-point vector (x, y, radius).
  • method - Detection method (see cv.HoughModes). Currently, the only implemented method is HOUGH_GRADIENT.
  • dp - Inverse ratio of the accumulator resolution to the image resolution. For example, if dp = 1, the accumulator has the same resolution as the input image. If dp = 2, the accumulator has half the width and height.
  • minDist - Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighboring circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
  • param1 - First method-specific parameter. In the case of HOUGH_GRADIENT, it is the higher of the two thresholds passed to the Canny edge detector (the lower one is half of it).
  • param2 - Second method-specific parameter. In the case of HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first.
  • minRadius - Minimum circle radius.
  • maxRadius - Maximum circle radius.

Let's download a new Sentinel 2 image using the Google Earth Engine API again:

In [ ]:
AOI_circles = ee.Geometry.Polygon(
        [[[38.09353216316101, 30.383502496780785],
          [38.09353216316101, 30.095201467955714],
          [38.48766668464538, 30.095201467955714],
          [38.48766668464538, 30.383502496780785]]])
In [ ]:
startDateviz = ee.Date.fromYMD(2021,1,1);
endDateviz = ee.Date.fromYMD(2021,1,31);
collectionviz = ee.ImageCollection("COPERNICUS/S2_HARMONIZED").filterDate(startDateviz,endDateviz).filterBounds(AOI_circles).filterMetadata('CLOUDY_PIXEL_PERCENTAGE', 'less_than', 10);
In [ ]:
S2 = collectionviz.median().clip(AOI_circles).divide(10000)
In [ ]:
image_circle = S2.select(['B2','B3','B4','B5','B8']).reproject('EPSG:4326', None, 20)
In [ ]:
task = ee.batch.Export.image.toDrive(image=image_circle,
                                         crs='EPSG:4326',
                                         scale=20,
                                         fileFormat='GeoTIFF',
                                         description='Saudi_circles' ,
                                         maxPixels=1e13,
                                         region= AOI_circles)
In [ ]:
task.start()

Now let's open the image stored in Google Drive:

In [ ]:
path_circle = '/content/drive/MyDrive/Datasets_CV/Saudi_circles.tif'
In [ ]:
with rasterio.open(path_circle) as src:
    im_circle = src.read()
In [ ]:
im_circle = im_circle.transpose([1,2,0])

We will use the NDVI from this area in this example.

In [ ]:
nir = im_circle[:,:,4]
red = im_circle[:,:,2]
In [ ]:
ndvi_circles = (nir - red) / (nir + red)

We can see that there are several irrigation pivots in this area:

In [ ]:
plt.figure(figsize=[12,12])
plt.imshow(ndvi_circles, cmap='RdYlGn')
plt.colorbar()
Out[ ]:
<matplotlib.colorbar.Colorbar at 0x78f1d1f5f880>

We start by filtering the image to remove noise. We also normalize the image value range from [-1, 1] to [0, 1]:

In [ ]:
ndvi_circles = cv2.GaussianBlur(ndvi_circles,(3,3), 0);
In [ ]:
ndvi_c_rescaled = cv2.normalize(ndvi_circles, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)
In [ ]:
ndvi_c_rescaled = ndvi_c_rescaled * 255
In [ ]:
ndvi_c_rescaled = ndvi_c_rescaled.astype('uint8')

Let's now find the edges present in the image using the Canny function:

In [ ]:
edges_circle = cv2.Canny(ndvi_c_rescaled,100,200)
In [ ]:
plt.figure(figsize=[12,12])
plt.imshow(edges_circle,cmap = 'gray')
Out[ ]:
<matplotlib.image.AxesImage at 0x78f1d2691420>

Here we are going to transform our NDVI image into an RGB image by applying the RdYlGn colormap.

In [ ]:
cm = plt.get_cmap('RdYlGn')
output = cm(ndvi_c_rescaled)
In [ ]:
output = output[:,:,0:3]*255

We convert to uint8 and plot the result:

In [ ]:
rgb_image = output.astype(np.uint8)
In [ ]:
plt.figure(figsize=(10,10))
plt.imshow(rgb_image)
plt.show()

Let's then apply the Circular Hough Transform (CHT) to our edges image:

In [ ]:
circles = cv2.HoughCircles(edges_circle,cv2.HOUGH_GRADIENT,1,20, param1=300, param2=20, minRadius=10, maxRadius=30)

Then we can plot the circles found onto the RGB rendering of the NDVI image:

In [ ]:
if circles is not None:
	# convert the (x, y) coordinates and radius of the circles to integers
	circles = np.round(circles[0, :]).astype("int")
	# loop over the (x, y) coordinates and radius of the circles
	for (x, y, r) in circles:
		# draw the circle in the output image, then draw a rectangle
		# corresponding to the center of the circle
		cv2.circle(rgb_image, (x, y), r, (0, 0, 0), 4)
		cv2.rectangle(rgb_image, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
In [ ]:
plt.figure(figsize=(20,20))
plt.imshow(rgb_image)
plt.axis('off')
plt.show()

After obtaining the points, we will convert them to coordinates:

In [ ]:
from osgeo import osr, ogr, gdal
import pandas as pd
import geopandas as gpd
from rasterio.plot import show

Let's create some helper functions:

In [ ]:
def pixel_to_world(geo_matrix, x, y):
    ul_x = geo_matrix[0]
    ul_y = geo_matrix[3]
    x_dist = geo_matrix[1]
    y_dist = geo_matrix[5]
    _x = x * x_dist + ul_x
    _y = y * y_dist + ul_y
    return _x, _y


def build_transform_inverse(dataset, EPSG):
    source = osr.SpatialReference(wkt=dataset.GetProjection())
    target = osr.SpatialReference()
    target.ImportFromEPSG(EPSG)
    return osr.CoordinateTransformation(source, target)


def find_spatial_coordinate_from_pixel(dataset, transform, x, y):
    world_x, world_y = pixel_to_world(dataset.GetGeoTransform(), x, y)
    point = ogr.Geometry(ogr.wkbPoint)
    point.AddPoint(world_x, world_y)
    point.Transform(transform)
    return point.GetX(), point.GetY()
In [ ]:
ds = gdal.Open('/content/drive/MyDrive/Datasets_CV/Saudi_circles.tif')

Let's get the geometric transform using EPSG 4326:

In [ ]:
_t = build_transform_inverse(ds, 4326)

Let's convert each point found and add it to a dataframe representing the coordinates:

In [ ]:
ls_x = []
ls_y = []
for (x, y, r) in circles:
  coordinates = find_spatial_coordinate_from_pixel(ds, _t, x, y)
  ls_x.append(coordinates[0])
  ls_y.append(coordinates[1])
In [ ]:
df_xy = pd.DataFrame([])
df_xy['x'] = ls_x
df_xy['y'] = ls_y

Then we just use GeoPandas to create a GeoDataFrame with the data obtained:

In [ ]:
gdf = gpd.GeoDataFrame(df_xy, geometry=gpd.points_from_xy(df_xy['x'], df_xy['y']))

To check, let's plot the points over the image. If you need to export the points, GeoPandas does this in a very simple way:

In [ ]:
src = rasterio.open(path_circle)
fig, ax = plt.subplots(figsize=(15, 15))
show((src, 4), ax=ax)
gdf.plot(ax=ax, facecolor='none', edgecolor='red')
Out[ ]:
<Axes: >
In [ ]:
gdf.to_file("drive/My Drive/Saudi-Pivots.shp")

Circular Hough Transform using skimage¶

In the following example, the Hough transform is used to detect the positions of circular features and match their edges. We provide a range of plausible radii; for each radius, candidate circles are extracted, and we finally keep the most prominent ones. The result shows that the circle positions are well detected.

Given a black circle on a white background, we first guess its radius (or a range of radii) to construct a new circle. This circle is applied on each black pixel of the original picture and the coordinates of this circle are voting in an accumulator. From this geometrical construction, the original circle center position receives the highest score.

Note that the accumulator is built larger than the original picture in order to detect centers outside the frame. Its size is extended by twice the larger radius.
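That padded-accumulator voting scheme can be sketched in NumPy for a single known radius (a toy illustration of the idea, not hough_circle itself; the drawn test circle is an assumption of this sketch):

```python
import numpy as np

def hough_circle_fixed_radius(binary, radius):
    """Classic circle voting for one known radius: every non-zero pixel votes
    for all centres lying `radius` away. The accumulator is padded by `radius`
    on each side so centres outside the frame can still be detected."""
    rows, cols = binary.shape
    acc = np.zeros((rows + 2 * radius, cols + 2 * radius), dtype=np.int64)
    angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    dy = np.round(radius * np.sin(angles)).astype(int)
    dx = np.round(radius * np.cos(angles)).astype(int)
    for y, x in zip(*np.nonzero(binary)):
        # duplicate offsets from the dense angle sampling collapse to a
        # single vote per centre, which is exactly what we want here
        acc[y + radius + dy, x + radius + dx] += 1
    return acc

# a drawn circle of radius 8 centred at (20, 25)
img_c = np.zeros((50, 50), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 200)
img_c[np.round(20 + 8 * np.sin(t)).astype(int),
      np.round(25 + 8 * np.cos(t)).astype(int)] = 1
acc = hough_circle_fixed_radius(img_c, 8)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cy - 8, cx - 8)  # peak lands at (or within a pixel of) the true centre (20, 25)
```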

In [ ]:
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter

In the same way as with hough_line, let's create a vector with the possible radii:

In [ ]:
hough_radii = np.arange(10, 100, 5)
hough_res = hough_circle(edges_circle, hough_radii)

After using hough_circle_peaks we get the circles, which we can plot on the RGB rendering of the NDVI image:

In [ ]:
rgb_image = output.astype(np.uint8)
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii, total_num_peaks=200)
In [ ]:
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(20, 20))
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius,
                                    shape=rgb_image.shape)
    rgb_image[circy, circx] = (0, 0, 255)

ax.imshow(rgb_image, cmap=plt.cm.gray)
plt.axis('off')
plt.show()

Blob detection¶

OpenCV provides a convenient way to detect blobs and filter them based on different characteristics. Let’s start with the simplest example:

First we create a binary NDVI image using a threshold:

In [ ]:
_,ndvi_thresh = cv2.threshold(ndvi_circles,0.2,1,cv2.THRESH_BINARY)
In [ ]:
plt.figure(figsize=(15,15))
plt.imshow(ndvi_thresh)
plt.axis('off')
plt.show()

Let's remove the noise using the morphological operations of erosion and dilation:

In [ ]:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
In [ ]:
ndvi_eroded = cv2.erode(ndvi_thresh,kernel,iterations = 3)
In [ ]:
ndvi_dilation = cv2.dilate(ndvi_eroded,kernel,iterations = 2)
In [ ]:
plt.figure(figsize=(15,15))
plt.imshow(ndvi_dilation)
plt.axis('off')
plt.show()

So we convert to uint8:

In [ ]:
ndvi_dilation = ndvi_dilation*255
ndvi_dilation = ndvi_dilation.astype('uint8')

Filtering Blobs by Color, Size and Shape: the parameters for SimpleBlobDetector can be set to filter the type of blobs we want.

  • By Color :

First you need to set filterByColor = 1. Set blobColor = 0 to select darker blobs, and blobColor = 255 for lighter blobs.

  • By Size :

You can filter the blobs based on size by setting filterByArea = 1 and appropriate values for minArea and maxArea. E.g. setting minArea = 100 will filter out all the blobs that have fewer than 100 pixels.

  • By Shape :

Shape has three different parameters:

  • Circularity :

This just measures how close to a circle the blob is. E.g. a regular hexagon has higher circularity than, say, a square. To filter by circularity, set filterByCircularity = 1, then set appropriate values for minCircularity and maxCircularity.

  • Convexity :

Convexity is defined as the area of the blob divided by the area of its convex hull, where the convex hull of a shape is the tightest convex shape that completely encloses it. To filter by convexity, set filterByConvexity = 1, followed by setting 0 ≤ minConvexity ≤ 1 and maxConvexity (≤ 1).

  • Inertia Ratio :

Don't let this scare you. Mathematicians often use confusing words to describe something very simple. All you have to know is that this measures how elongated a shape is. E.g. for a circle this value is 1, for an ellipse it is between 0 and 1, and for a line it is 0. To filter by inertia ratio, set filterByInertia = 1, and set 0 ≤ minInertiaRatio ≤ 1 and maxInertiaRatio (≤ 1) appropriately.
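The circularity measure SimpleBlobDetector uses is 4π·Area / Perimeter², which equals 1 for a perfect circle. A quick check of the hexagon-vs-square claim above (shapes and side lengths are made up for the example):

```python
import math

def circularity(area, perimeter):
    """SimpleBlobDetector's circularity measure: 4*pi*A / P^2 (1.0 for a circle)."""
    return 4 * math.pi * area / perimeter ** 2

# unit-side square vs regular hexagon with unit side
square = circularity(1.0, 4.0)
hexagon = circularity(3 * math.sqrt(3) / 2, 6.0)
print(round(square, 3), round(hexagon, 3))  # -> 0.785 0.907
```

As claimed, the hexagon (0.907) is closer to a circle than the square (0.785).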

In [ ]:
params = cv2.SimpleBlobDetector_Params()

params.thresholdStep = 1
params.minRepeatability = 1

params.blobColor = 255
params.minThreshold = 200
params.maxThreshold = 255


params.filterByArea = True
params.minArea = 100


params.filterByCircularity = True
params.minCircularity = 0.6;
params.maxCircularity = 1.0


params.filterByConvexity = True
params.minConvexity = 0.6;
params.maxConvexity = 1.0;



params.filterByInertia = True
params.minInertiaRatio = 0.5;
params.maxInertiaRatio = 1.0;

After defining the parameters, we create an instance of SimpleBlobDetector:

In [ ]:
detector = cv2.SimpleBlobDetector_create(params)

Then we apply it to our filtered image:

In [ ]:
keypoints = detector.detect(ndvi_dilation)

The blobs found are plotted on the RGB rendering of our NDVI image:

In [ ]:
rgb_image = output.astype(np.uint8)
im_with_keypoints = cv2.drawKeypoints(rgb_image, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
In [ ]:
plt.figure(figsize=(15,15))
plt.imshow(im_with_keypoints)
plt.axis('off')
plt.show()

Blob detection on Skimage¶

Blobs are bright-on-dark or dark-on-bright regions in an image. In this example, blobs are detected using three algorithms.

  • Laplacian of Gaussian (LoG)

This is the most accurate and slowest approach. It computes the Laplacian of Gaussian of the image with successively increasing standard deviation and stacks the results up in a cube. Blobs are local maxima in this cube. Detecting larger blobs is especially slow because of the larger kernel sizes used during convolution. Only bright blobs on dark backgrounds are detected. See skimage.feature.blob_log() for usage.

  • Difference of Gaussian (DoG)

This is a faster approximation of the LoG approach. In this case the image is blurred with increasing standard deviations and the difference between each pair of successively blurred images is stacked up in a cube. This method suffers from the same disadvantage as the LoG approach for detecting larger blobs. Blobs are again assumed to be bright on dark. See skimage.feature.blob_dog() for usage.

  • Determinant of Hessian (DoH)

This is the fastest approach. It detects blobs by finding maxima of the determinant of the Hessian matrix of the image. The detection speed is independent of the size of blobs, as the implementation internally uses box filters instead of convolutions. Bright-on-dark as well as dark-on-bright blobs are detected. The downside is that small blobs (<3px) are not detected accurately. See skimage.feature.blob_doh() for usage.
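The cells below apply these detectors to our own ndvi_dilation array; as a self-contained sketch, the three functions can be compared on a synthetic image of two bright Gaussian blobs on a dark background (the threshold values here follow the scikit-image example gallery):

```python
import numpy as np
from skimage.feature import blob_dog, blob_log, blob_doh

# Synthetic test image: two bright Gaussian blobs on a dark background.
yy, xx = np.mgrid[:128, :128]
img = (np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / (2 * 8.0 ** 2))
       + np.exp(-((yy - 90) ** 2 + (xx - 90) ** 2) / (2 * 4.0 ** 2)))

# Each detector returns one row per blob: (y, x, sigma).
blobs_log = blob_log(img, max_sigma=15, num_sigma=10, threshold=0.1)
blobs_dog = blob_dog(img, max_sigma=15, threshold=0.1)
blobs_doh = blob_doh(img, max_sigma=15, threshold=0.01)
print(len(blobs_log), len(blobs_dog), len(blobs_doh))
```

Note the detected sigma, not a radius, is returned in the third column; the conversion to an approximate radius differs per detector, as shown in the plotting cells below.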

In [ ]:
from skimage.feature import blob_dog, blob_log, blob_doh

We start by testing blob_log:

In [ ]:
blobs = blob_log(ndvi_dilation, min_sigma=10)
In [ ]:
rgb_image = output.astype(np.uint8)
fig, ax = plt.subplots(figsize=(12,12))
ax.imshow(rgb_image)
for blob in blobs:
    y, x, sigma = blob
    # for blob_log the blob radius is approximately sqrt(2) * sigma
    ax.add_patch(plt.Circle((x, y), sigma * np.sqrt(2),
                 color='b', fill=False))

Now the blob_dog:

In [ ]:
blobs = blob_dog(ndvi_dilation, min_sigma=10)
In [ ]:
rgb_image = output.astype(np.uint8)
fig, ax = plt.subplots(figsize=(12,12))
ax.imshow(rgb_image)
for blob in blobs:
    y, x, sigma = blob
    # for blob_dog the blob radius is approximately sqrt(2) * sigma
    ax.add_patch(plt.Circle((x, y), sigma * np.sqrt(2),
                 color='b', fill=False))

And finally the blob_doh:

In [ ]:
blobs = blob_doh(ndvi_dilation, min_sigma=10)
In [ ]:
rgb_image = output.astype(np.uint8)
fig, ax = plt.subplots(figsize=(12,12))
ax.imshow(rgb_image)
for blob in blobs:
    y, x, sigma = blob
    # for blob_doh the blob radius is approximately equal to sigma
    ax.add_patch(plt.Circle((x, y), sigma,
                 color='b', fill=False))

References:

https://docs.opencv.org/3.4/da/d22/tutorial_py_canny.html

https://learnopencv.com/edge-detection-using-opencv/

https://scikit-image.org/docs/dev/auto_examples/edges/plot_line_hough_transform.html

https://stackoverflow.com/questions/61466540/hough-lines-detecting-too-many-lines

https://docs.opencv.org/3.4/d4/d70/tutorial_hough_circle.html

https://docs.opencv.org/4.5.2/d3/de5/tutorial_js_houghcircles.html

https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_blob.html

https://medium.com/swlh/image-processing-with-python-detecting-blobs-in-digital-images-edebfd22328c

https://learnopencv.com/blob-detection-using-opencv-python-c/

https://docs.opencv.org/4.5.2/dc/d0d/tutorial_py_features_harris.html

https://medium.com/data-breach/introduction-to-harris-corner-detector-32a88850b3f6

https://aishack.in/tutorials/harris-corner-detector/

https://en.wikipedia.org/wiki/Harris_corner_detector